Conversation

DanielVyazhev (Collaborator)

Cross-entropy calculation scripts and folder/dataset

from reasoning_fine_tune.utils.seed import set_seed
from reasoning_fine_tune.utils.validation import validate_mmlu_answer

def estimate_dataset(

Member

Wouldn't we want to try parameterizing the function that computes the entropy of a single token? It seems that if we teach it to accept compute_entropy_from_logits from the outside, we could avoid code duplication. What do you think?

Member

And then we could also allow custom field names to be set: field_ans and so on.
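
A minimal sketch of what these two suggestions could look like together; estimate_dataset, compute_entropy_from_logits, and field_ans are named in the thread, but the signature below is an assumption, not the actual code:

from typing import Callable

import torch

def estimate_dataset(
    model,
    tokenizer,
    dataset,
    # Injected entropy function, so the single-token and cross-entropy
    # scripts can share this loop instead of duplicating it (hypothetical).
    compute_entropy_from_logits: Callable[[torch.Tensor], torch.Tensor],
    # Customizable dataset field names (hypothetical defaults).
    field_question: str = "question",
    field_ans: str = "answer",
):
    for row in dataset:
        prompt, answer = row[field_question], row[field_ans]
        # ... run the model on the prompt, then delegate the entropy
        # math to the injected callable ...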

return_dict=True
)

last_token_logits = outputs.logits[:, -1, :]  # [batch_size, vocab_size]

Member

Now this is a really good find! We should switch the single-token entropy computation over to outputs.logits as well.
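
As a hedged illustration of that migration (the real compute_entropy_from_logits is not shown in this thread, so the body below is an assumption):

import torch
import torch.nn.functional as F

def compute_entropy_from_logits(logits: torch.Tensor) -> torch.Tensor:
    # Shannon entropy (in nats) of the next-token distribution, per batch item.
    log_probs = F.log_softmax(logits, dim=-1)          # [batch_size, vocab_size]
    return -(log_probs.exp() * log_probs).sum(dim=-1)  # [batch_size]

# With the forward pass above, single-token entropy becomes:
# entropy = compute_entropy_from_logits(outputs.logits[:, -1, :])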

@aigoncharov (Member)

And will you add the data split for the feature?

@aigoncharov aigoncharov self-assigned this Aug 2, 2025
@yndx-vyazhev left a comment

add data splits and trainer_state

@DanielVyazhev DanielVyazhev merged commit 0b918bc into LabARSS:main Aug 22, 2025